new supercomputer


Google says it accessed parallel universes with its new supercomputer

Daily Mail - Science & tech

Google's quantum computing breakthrough on Monday has left the physicist who heads the project a believer in 'the idea that we live in a multiverse.' 'Willow,' the tech giant's new quantum chip, succeeded in solving a computational problem so complex it would have taken today's best supercomputers an estimated 10 septillion years -- vastly longer than the age of our entire universe. But Google said its new quantum computer solved the puzzle 'in under five minutes.' Calling Willow's performance 'astonishing,' the founder and leader of the Google Quantum AI team, physicist Hartmut Neven, said its high-speed result 'lends credence to the notion that quantum computation occurs in many parallel universes.' Neven credited Oxford University physicist David Deutsch with proposing that the successful development of quantum computing would, in effect, affirm the 'many worlds interpretation' of quantum mechanics and the existence of a multiverse. Starting in the 1970s, Deutsch had, in fact, walked backwards into becoming a pioneer of quantum computing, driven less by interest in the technology itself than by a desire to test the multiverse theory.


10 Best AI Stocks (Artificial Intelligence) To Buy In 2023 And Beyond - Hashtag Investing

#artificialintelligence

AI, automation, and robotics have created an exciting age of disruptive innovation. Companies worldwide invest in AI products and services to stay ahead of the competition, so investing in AI stocks is one way to seek exposure to this growth sector: AI stocks combine the excitement and promise of AI with the potential for sustainable returns. With AI-based technologies becoming more ubiquitous by the day, AI stocks offer investors a chance to gain exposure to companies with AI and automation at their core.


Nvidia and Microsoft to build massive AI cloud computer - Sleuth Technical

#artificialintelligence

Nvidia announced a collaboration with Microsoft to build a "massive" cloud computer focused on AI. The plan is to use tens of thousands of high-end Nvidia GPUs for applications like deep learning and large language models. The companies aim to make it one of the most powerful AI supercomputers in the world. Microsoft, for its part, will contribute its Azure cloud infrastructure and ND- and NC-series virtual machines.


Nvidia and Microsoft team up to build massive AI cloud computer

#artificialintelligence

On Wednesday, Nvidia announced a collaboration with Microsoft to build a "massive" cloud computer focused on AI. It will reportedly use tens of thousands of high-end Nvidia GPUs for applications like deep learning and large language models. The companies aim to make it one of the most powerful AI supercomputers in the world. To that end, the new supercomputer will feature thousands of units of what is arguably the most powerful GPU in the world, the Hopper H100, which Nvidia launched in October. Nvidia will also provide its second most powerful GPU, the A100, and its Quantum-2 InfiniBand networking platform, which can transfer data at 400 gigabits per second between servers, linking them into a powerful cluster.
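To put the cited 400 Gb/s link speed in context, here is a rough back-of-envelope sketch. The bandwidth figure is from the article; the 350 GB checkpoint size is a hypothetical illustration, not a number from the source:

```python
# Back-of-envelope: time to move a (hypothetical) 350 GB model
# checkpoint across one Quantum-2 InfiniBand link at the cited 400 Gb/s.
LINK_GBPS = 400                    # gigabits per second (from the article)
CHECKPOINT_GB = 350                # gigabytes; hypothetical example size
link_gb_per_s = LINK_GBPS / 8      # 400 Gb/s = 50 GB/s
seconds = CHECKPOINT_GB / link_gb_per_s
print(f"{seconds:.1f} s")          # prints "7.0 s"
```

At that rate, even very large model states can be shuffled between servers in seconds, which is what makes clustering tens of thousands of GPUs into one training machine practical.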


Tesla Shows Off Its Brand New AI-Training Supercomputer

#artificialintelligence

Tesla's Senior Director of AI, Andrej Karpathy, unveiled the electric vehicle maker's new supercomputer during a presentation at the 2021 Conference on Computer Vision and Pattern Recognition (CVPR). Last year, Elon Musk highlighted Tesla's plans to build a "beast" of a neural network training supercomputer called "Dojo." For several years, the company has been teasing the Dojo supercomputer, which Musk has hinted will be the world's fastest, outperforming the current leader, Japan's Fugaku, which runs at 415 petaflops. The new machine appears to be a precursor to the Dojo project, with Karpathy stating that it is the number five supercomputer in the world in terms of floating-point operations per second (FLOPS). It is certainly not lacking in the processing department.


"Quantum Supremacy" - China's New Supercomputer "10 Billion Times Faster" Than Google's

#artificialintelligence

America is locked in a quantum computing race with China. The latest developments from Chinese scientists show a "significant computing breakthrough, achieving quantum computational advantage," according to state media outlet Xinhua News Agency. On Thursday, China's top quantum research group published a paper in the journal Science, titled "Quantum computational advantage using photons," which outlines how a quantum computer prototype detected up to 76 photons through Gaussian boson sampling (GBS), a standard simulation algorithm, Xinhua said, adding that its ability to process complex problems is exponentially faster than most supercomputers. Called "Jiuzhang," the quantum computer prototype can reportedly conduct large-scale GBS 100 trillion times faster than the world's fastest supercomputer. Researchers said their prototype is 10 billion times faster than the 53-qubit quantum computer developed by Google.


Microsoft's new supercomputer will train AI to outperform humans

#artificialintelligence

Microsoft has teamed up with a startup co-founded by Elon Musk to build one of the fastest supercomputers in the world, the company announced Tuesday during its annual Build developers conference -- held virtually this year because of the coronavirus pandemic. The startup is OpenAI, whose charter commits it to ensuring that AI capable of outperforming humans nevertheless benefits all of humanity. Microsoft stressed that this work represents a key milestone in a partnership announced last year to jointly create new supercomputing technologies in Azure. This is a first step, the computing giant explained, toward offering large AI models "and the infrastructure needed to train them" as a platform that developers and other organizations can build on. "The exciting thing about these models is the breadth of things they're going to enable," said Microsoft Chief Technical Officer Kevin Scott in a company blog post about the news.


UQ's new supercomputer is pushing the limits in analysing human skull models - ZDNet

#artificialintelligence

The University of Queensland (UQ) is leveraging the power of its new supercomputer to analyse human skull models, with the work dedicated to delaying the onset of one of the world's most debilitating illnesses -- Alzheimer's disease. Supplied by Dell Technologies, UQ's new high performance computing (HPC) system is operated by the university's Research Computing Centre (RCC). The system, dubbed Wiener, is capable of processing massive numbers of computational tasks in parallel, including data visualisation and machine learning, which allows for the modelling of possible treatments for illnesses. Speaking with media at the Dell Technologies Forum in Sydney on Tuesday, UQ RCC chief technology officer Jake Carroll said the centre employs people from a wide range of fields, from physicists through to computer scientists, and even specialists in the humanities. "People from all walks of research need to be able to participate and integrate with these things," Carroll said.


MIT Lincoln Laboratory Supercomputing Center Installs World's Fastest Supercomputer at a University, powered by NVIDIA V100 GPUs - NVIDIA Developer News Center

#artificialintelligence

To power AI applications and research across engineering, science, and medicine, the Massachusetts Institute of Technology (MIT) Lincoln Laboratory Supercomputing Center has just installed a new GPU-accelerated supercomputer, powered by 896 NVIDIA Tensor Core V100 GPUs. According to MIT, the new system named TX-GAIA for Green AI Accelerator was ranked by TOP500 as the most powerful AI supercomputer at any university in the world. "We are thrilled by the opportunity to enable researchers across Lincoln and MIT to achieve incredible scientific and engineering breakthroughs," said Jeremy Kepner, a Lincoln Laboratory Fellow who heads the Lincoln Laboratory Supercomputing Center. "TX-GAIA will play a large role in supporting AI, physical simulation, and data analysis across all Laboratory missions," he added. The new supercomputer has a peak performance of 100 AI petaFLOPs, as measured by the computing speed required to perform mixed-precision floating-point operations commonly used in building deep neural networks.
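As a quick sanity check on the figures above (a sketch, not a calculation from the article): dividing the quoted 100 AI petaFLOPS peak across the 896 V100 GPUs implies roughly 112 TFLOPS of mixed-precision throughput per GPU, in line with the V100's advertised Tensor Core peak of about 125 TFLOPS (peak figures rarely sum exactly in practice):

```python
# Sanity check: per-GPU share of TX-GAIA's quoted 100 AI petaFLOPS peak.
total_flops = 100e15            # 100 AI petaFLOPS (from the article)
num_gpus = 896                  # V100 GPU count (from the article)
per_gpu_tflops = total_flops / num_gpus / 1e12
print(f"{per_gpu_tflops:.1f} TFLOPS per GPU")   # prints "111.6 TFLOPS per GPU"
```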


HPE Deploys TX-GAIA Supercomputer at MIT Lincoln Laboratory - insideHPC

#artificialintelligence

Today HPE announced the deployment of a new supercomputer at the MIT Lincoln Laboratory Supercomputing Center for compute-intensive AI applications, bolstering research across engineering, science, and medicine. Called TX-GAIA (Green AI Accelerator), the new supercomputer converges HPC and AI to support workloads such as modeling and simulation and to train complex deep neural networks (DNNs) and other machine learning models. It is based on the HPE Apollo 2000 system, which is purpose-built for HPC and optimized for AI, integrating the latest Intel Xeon Scalable processors and NVIDIA GPU accelerators. "At the MIT Lincoln Laboratory Supercomputing Center, our mission is to solve the nation's hardest technical challenges by advancing computationally intensive science, engineering, and medicine," said Jeremy Kepner, head and founder of the MIT Lincoln Laboratory Supercomputing Center (LLSC). "By collaborating with HPC leaders like HPE, we are expanding our technical capabilities to run emerging AI workloads on our supercomputer and accelerate innovation." The new supercomputer has a measured performance of 4.725 petaflops and will support research projects that fuel innovation in weather forecasting, medical data analysis, autonomous systems, synthetic DNA design, and new materials and devices. In addition, the system delivers a peak AI performance of 100 AI petaflops, as measured by the computing speed required to perform DNN operations. This will greatly accelerate the processing of deep neural networks and other compute-intensive AI workloads, improving training in areas such as image recognition, speech and natural language processing, and computer vision. The TX-GAIA system comprises nearly 900 Intel processors and 900 NVIDIA GPU accelerators.
The new system is housed in a modular data center facility, co-developed with HPE and designed to speed deployment and reduce overall IT resources. It is located in Holyoke, Massachusetts, where it is powered by abundant green energy, and will go into production in the fall of 2019. "We've seen strong industry demand for scalable performance to train higher volumes of AI that will advance science and engineering and make breakthroughs across industries," said Bill Mannel, vice president and general manager, HPC and AI, at HPE. "Our continued partnership with the MIT Lincoln Laboratory Supercomputing Center extends the power of our HPC technologies to boost AI R&D and create new experiences."